Collaborating Authors

 Spam Filtering


'100 Video Calls Per Day': Models Are Applying to Be the Face of AI Scams

WIRED

Dozens of Telegram channels reviewed by WIRED include job listings for "AI face models." The (mostly) women who land these gigs are likely being used to dupe victims out of their money. "I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish," the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately. Those impressive language skills, however, have likely been put to use as part of elaborate "pig-butchering" scams targeting Americans.



AI helps scam centers evade crackdown in Asia and dupe more victims

The Japan Times

Shwe Kokko city, a casino, entertainment, and tourism complex, seen from Thailand's side of the border in the Mae Sot district, Thailand, on Feb. 5, 2025, after Bangkok said it would suspend electricity supplies to some border areas with Myanmar to try to curb scam centers | REUTERS. Criminals in Southeast Asia are harnessing inexpensive artificial intelligence tools to target bigger pools of potential victims at high speed, keeping scam centers humming even as governments try to crack down, senior officials at Interpol say. Previously, some scams were easy to spot, from poor-quality online ads luring people to work in such centers to the scams themselves, typically designed to make people part with their money through the promise of romance or investment returns. Now, scammers are using large language models and other AI tools to make their cons more sophisticated. Artificial intelligence also allows them to change course quickly, shifting to newer targets and from fresh locations.


Scammers in China Are Using AI-Generated Images to Get Refunds

WIRED

From dead crabs to shredded bed sheets, fraudsters are using fake photos and videos to get their money back from ecommerce sites. I don't want to admit it, but I did spend a lot of money online this holiday shopping season. And unsurprisingly, some of those purchases didn't meet my expectations. A photobook I bought was damaged in transit, so I snapped a few pictures, emailed them to the merchant, and got a refund. Online shopping platforms have long depended on photos submitted by customers to confirm that refund requests are legitimate.


The fake refund scam: Why scammers love holiday shoppers

FOX News

Data brokers sell personal shopping information to scammers who craft convincing fake refund messages during peak holiday purchasing periods.


How to help older relatives with tech over the holidays

FOX News

Essential tech support tips for older adults including password management, two-factor authentication and simple device fixes to prevent future technology problems.


How Norton is helping to block the latest AI scams

PCWorld

Norton's award-winning security software makes it simple to keep you and your family safe from increasingly sophisticated digital threats. AI scams are getting more sophisticated; here's how Norton is fighting back to protect you and your family. It feels like AI has ended up just about everywhere overnight. From deepfakes and ChatGPT homework to em-dashes and political misinformation, keeping on top of the latest AI trends is almost impossible.


Black Friday traps! New AI scams are plaguing this shopping season

PCWorld

The holiday season is just around the corner, but those sales are now accompanied by new AI scams designed to steal your money and data. Black Friday started as a day of bargain sales but has evolved into a global shopping phenomenon, and the bigger the participation, the more attractive it all becomes for criminals. According to Austrian fact-checker Mimikama, security researchers have been observing a new wave of scam attempts made possible by generative AI: fake shops that look deceptively real, deepfake videos featuring celebrities, and phishing attacks via social media and text messages.


Balancing Quality and Variation: Spam Filtering Distorts Data Label Distributions

Fleisig, Eve, Orlikowski, Matthias, Cimiano, Philipp, Klein, Dan

arXiv.org Artificial Intelligence

For machine learning datasets to accurately represent diverse opinions in a population, they must preserve variation in data labels while filtering out spam or low-quality responses. How can we balance annotator reliability and representation? We empirically evaluate how a range of heuristics for annotator filtering affect the preservation of variation on subjective tasks. We find that these methods, designed for contexts in which variation from a single ground-truth label is considered noise, often remove annotators who disagree instead of spam annotators, introducing suboptimal tradeoffs between accuracy and label diversity. We find that conservative settings for annotator removal (<5%) are best, after which all tested methods increase the mean absolute error from the true average label. We analyze performance on synthetic spam to observe that these methods often assume spam annotators are more random than real spammers tend to be: most spammers are distributionally indistinguishable from real annotators, and the minority that are distinguishable tend to give relatively fixed answers, not random ones. Thus, tasks requiring the preservation of variation reverse the intuition of existing spam filtering methods: spammers tend to be less random than non-spammers, so metrics that assume variation is spam fare worse. These results highlight the need for spam removal methods that account for label diversity.
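The failure mode the abstract describes can be illustrated with a toy simulation. The sketch below is not the paper's method; the filter, thresholds, and annotator names are invented for illustration. It scores each annotator by mean absolute deviation from the per-item average label and drops the most deviant one, then shows how such an agreement-based heuristic can remove a genuine minority-opinion annotator while keeping a fixed-answer spammer.

```python
import random
import statistics

def agreement_filter(labels_by_annotator, drop_fraction):
    """Toy agreement-based filter (illustrative, not from the paper):
    score each annotator by mean absolute deviation from the per-item
    average label, then drop the highest-deviation annotators."""
    annotators = list(labels_by_annotator)
    n_items = len(next(iter(labels_by_annotator.values())))
    item_means = [statistics.mean(labels_by_annotator[a][i] for a in annotators)
                  for i in range(n_items)]
    deviation = {a: statistics.mean(abs(labels_by_annotator[a][i] - item_means[i])
                                    for i in range(n_items))
                 for a in annotators}
    n_drop = int(len(annotators) * drop_fraction)
    dropped = sorted(annotators, key=deviation.get, reverse=True)[:n_drop]
    return [a for a in annotators if a not in dropped], dropped

random.seed(0)
n_items = 50

def clip(x):  # keep labels on a 1-5 rating scale
    return min(5, max(1, round(x)))

# Six majority annotators cluster around label 2; one genuine minority
# opinion clusters around label 4; a "fixed-answer" spammer always says 3.
labels = {f"majority_{k}": [clip(random.gauss(2, 0.7)) for _ in range(n_items)]
          for k in range(6)}
labels["minority"] = [clip(random.gauss(4, 0.7)) for _ in range(n_items)]
labels["spammer_fixed"] = [3] * n_items

kept, dropped = agreement_filter(labels, drop_fraction=0.125)
print("dropped:", dropped)  # the disagreeing annotator, not the spammer
```

Because the spammer's constant answer sits near the pooled mean while the minority annotator sits far from it, the deviation score flags the legitimate disagreement first, which is the distortion of label variation the paper measures.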


How to stop impostor bank scams before they drain your wallet

FOX News

The Federal Trade Commission reports over $2.9 billion in losses from impostor bank scams using caller ID spoofing and AI voice technology to deceive victims.